Can Graph Neural Networks Learn to Solve the MaxSAT Problem? (Student Abstract)

Authors

Abstract

This paper presents an attempt to bridge the gap between machine learning and symbolic reasoning. We build graph neural networks (GNNs) to predict solutions of the Maximum Satisfiability (MaxSAT) problem, the optimization variant of SAT. Two closely related graph representations are adopted, and we prove their theoretical equivalence. Through experimental evaluation, we also show that GNNs can achieve attractive performance on hard MaxSAT problems from certain distributions, even when compared with state-of-the-art solvers.
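The abstract does not spell out the two graph representations or the network architecture, but the general recipe can be illustrated. Below is a minimal, untrained sketch, assuming a bipartite literal-clause encoding and mean-aggregation message passing; these choices, along with all names and dimensions, are illustrative assumptions rather than the authors' model. A toy MaxSAT instance is turned into a graph, a few rounds of message passing are run with fixed random projections standing in for learned update networks, and a per-variable vote reads off an assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy (unweighted) MaxSAT instance: clauses are lists of literals,
# +v meaning x_v and -v meaning NOT x_v.
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
num_vars = 3
dim, rounds = 16, 8  # embedding width and message-passing iterations (illustrative)

def lit_index(lit):
    """Row of a literal node: 2*(v-1) for x_v, 2*(v-1)+1 for NOT x_v."""
    return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

# Incidence lists between literal nodes and clause nodes.
lit_to_clauses = {i: [] for i in range(2 * num_vars)}
for c, lits in enumerate(clauses):
    for l in lits:
        lit_to_clauses[lit_index(l)].append(c)

# Random initial embeddings; the fixed random projections below stand in
# for the learned update networks of a real (trained) GNN.
lit_emb = rng.normal(size=(2 * num_vars, dim))
cls_emb = rng.normal(size=(len(clauses), dim))
W_cls = rng.normal(size=(2 * dim, dim)) / np.sqrt(2 * dim)
W_lit = rng.normal(size=(2 * dim, dim)) / np.sqrt(2 * dim)
w_vote = rng.normal(size=dim)

for _ in range(rounds):
    # Clause nodes aggregate the embeddings of the literals they contain.
    new_cls = np.zeros_like(cls_emb)
    for c, lits in enumerate(clauses):
        msg = np.mean([lit_emb[lit_index(l)] for l in lits], axis=0)
        new_cls[c] = np.tanh(np.concatenate([cls_emb[c], msg]) @ W_cls)
    # Literal nodes aggregate the embeddings of the clauses containing them.
    new_lit = np.zeros_like(lit_emb)
    for i, cs in lit_to_clauses.items():
        msg = np.mean([new_cls[c] for c in cs], axis=0) if cs else np.zeros(dim)
        new_lit[i] = np.tanh(np.concatenate([lit_emb[i], msg]) @ W_lit)
    cls_emb, lit_emb = new_cls, new_lit

# Per-variable vote: set x_v true if its positive literal node scores higher.
assignment = {v + 1: float(lit_emb[2 * v] @ w_vote) > float(lit_emb[2 * v + 1] @ w_vote)
              for v in range(num_vars)}
satisfied = sum(any((l > 0) == assignment[abs(l)] for l in lits) for lits in clauses)
print(assignment, f"-> {satisfied}/{len(clauses)} clauses satisfied")
```

In a trained model the projection matrices would be learned, typically by supervising the per-variable vote against assignments that maximize the number of satisfied clauses.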



Related articles

Optimization for Problem Classes – Neural Networks that Learn to Learn –

The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure in order to reduce the model complexity and to minimize the model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks – the ability of the network to efficiently solve related problems denoted as a class of prob...

Can artificial neural networks learn language models?

Currently, N-gram models are the most common and widely used models for statistical language modeling. In this paper, we investigated an alternative way to build language models, i.e., using artificial neural networks to learn the language model. Our experimental results show that the neural network can learn a language model that performs even better than standard statistical methods.

Recursive Neural Networks Can Learn Logical Semantics

Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models – plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs) – can ...

Applying Multiple Complementary Neural Networks to Solve Multiclass Classification Problem

In this paper, a multiclass classification problem is solved using multiple complementary neural networks. Two techniques are applied to the multiple complementary neural networks: one-against-all and error-correcting output codes. We evaluate our proposed techniques on an extremely imbalanced data set named glass from the UCI machine learning repository. It is found that the combinati...

Recurrent Neural Networks Can Learn to Implement Symbol-Sensitive Counting

Recently, researchers have derived formal complexity analyses of analog computation in the setting of discrete-time dynamical systems. As an empirical contrast, training recurrent neural networks (RNNs) produces self-organized systems that are realizations of analog mechanisms. Previous work showed that an RNN can learn to process a simple context-free language (CFL) by counting. Herein, we ext...


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i13.26992